We investigate non-modular function maximization in the online interactive bandit setting. We are motivated by applications in which there is a natural complementarity between certain elements: this cannot be expressed using only submodular functions, which can represent only competitiveness between elements. We extend the purely submodular approach in two ways. First, we assume that the objective can be decomposed as the sum of monotone submodular and supermodular functions, known as a BP objective. Here, complementarity is naturally modeled by the supermodular component. We develop a UCB-style algorithm, where at each round the noisy payoff of the action taken is revealed, balancing between taking actions that reveal the unknown objective (exploration) and choosing actions that appear promising (exploitation). Defining regret with respect to a full-knowledge greedy baseline, and in terms of the supermodular curvature, we show that the algorithm incurs at most $O(\sqrt{T})$ regret after $T$ rounds. Second, for those functions that do not admit a BP structure, we provide analogous regret guarantees in terms of their submodularity ratio; this is applicable for functions that are almost, but not quite, submodular. We numerically study the tasks of movie recommendation on the MovieLens dataset and selection of training subsets for classification. Through these examples, we demonstrate the algorithm's performance as well as the shortcomings of viewing these problems as being purely submodular.
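The UCB-style selection rule described in this abstract can be illustrated with a minimal sketch. The code below is not the paper's algorithm: it simplifies to per-element confidence bounds on marginal gains (the `alpha`, `counts`, and `means` names are illustrative), showing only the explore/exploit trade-off of greedily building a set with optimistic estimates.

```python
import math

def ucb_greedy_round(n_elements, k, counts, means, t, alpha=1.0):
    """One round of a UCB-style greedy selection (simplified sketch).

    Greedily builds a set of k elements, scoring each candidate by an
    optimistic estimate (empirical mean + confidence radius) of its gain.
    Unseen elements get an infinite score, forcing exploration.
    """
    chosen = []
    for _ in range(k):
        best, best_score = None, -math.inf
        for e in range(n_elements):
            if e in chosen:
                continue
            if counts[e] == 0:
                score = math.inf  # never tried: explore it first
            else:
                radius = alpha * math.sqrt(2 * math.log(t + 1) / counts[e])
                score = means[e] + radius
            if score > best_score:
                best, best_score = e, score
        chosen.append(best)
    return chosen
```

After the round, the revealed noisy payoffs would update `counts` and `means`, shrinking the confidence radii of frequently played elements.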
Learning problems commonly exhibit an interesting feedback mechanism wherein the population data reacts to competing decision makers' actions. This paper formulates a new game-theoretic framework for this phenomenon, called multi-player performative prediction. We focus on two distinct solution concepts, namely (i) performatively stable equilibria and (ii) Nash equilibria of the game. The latter equilibria are arguably more informative, but can be found efficiently only when the game is monotone. We show that under mild assumptions, the performatively stable equilibria can be found efficiently by a variety of algorithms, including repeated retraining and repeated (stochastic) gradient play. We then establish transparent sufficient conditions for strong monotonicity of the game and use them to develop algorithms for finding Nash equilibria. We investigate derivative-free methods and adaptive gradient algorithms wherein each player alternates between learning a parametric description of their distribution and gradient steps. Synthetic and semi-synthetic numerical experiments illustrate the results.
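Repeated gradient play, one of the algorithms the abstract mentions for finding performatively stable equilibria, can be sketched on a toy problem. The quadratic losses and the linear population response below are my own illustrative assumptions, not the paper's model: each player's target distribution mean shifts with both deployed decisions, and each player takes a gradient step treating that distribution as fixed.

```python
import numpy as np

def repeated_gradient_play(steps=2000, lr=0.05, eps=0.3):
    """Repeated gradient play in a toy two-player performative game (sketch).

    Each player i deploys a scalar decision x_i; the population responds
    with a mean m that shifts linearly in both decisions (strength eps).
    Each player then descends its own loss 0.5*(x_i - m)^2 at the induced
    distribution, treating m as fixed -- the hallmark of performative
    stability rather than full Nash play.
    """
    x = np.array([1.0, 0.5])
    for _ in range(steps):
        m = eps * (x[0] + x[1])   # population response to deployed decisions
        grad = x - m              # gradient of each player's loss, m fixed
        x = x - lr * grad
    return x
```

At the fixed point both players satisfy x_i = m, which (with 2·eps ≠ 1) pins the performatively stable equilibrium at the origin in this toy instance.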
The study of learning in games has thus far focused primarily on normal-form games. In contrast, our understanding of learning in extensive-form games (EFGs), and particularly in EFGs with many agents, lags far behind, despite them being closer to many real-world applications. We consider the natural class of network zero-sum extensive-form games, which combines the global zero-sum property of agent payoffs, the efficient representation of graphical games, and the expressive power of EFGs. We examine the convergence properties of optimistic gradient ascent (OGA) in these games. We prove that the time-average behavior of such online learning dynamics exhibits an $O(1/T)$ rate of convergence to the set of Nash equilibria. Moreover, we show that the day-to-day behavior also converges to Nash with rate $O(c^{-t})$ for some game-dependent constant $c > 0$.
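The optimistic gradient update is simple to state. As a hedged illustration (a two-player zero-sum matrix game rather than the networked extensive-form setting the abstract studies, with step size and iteration count chosen arbitrarily), each player replaces the plain gradient with the extrapolation `2*g_t - g_{t-1}`:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0)

def optimistic_gradient(A, x0=None, y0=None, steps=5000, lr=0.05):
    """Optimistic gradient ascent/descent in a zero-sum matrix game (sketch).

    x maximizes x^T A y, y minimizes it. Each player uses the optimistic
    direction 2*g_t - g_{t-1}, which damps the cycling that plain gradient
    dynamics exhibit and stabilizes the day-to-day (last-iterate) behavior.
    """
    n, m = A.shape
    x = np.ones(n) / n if x0 is None else np.array(x0, dtype=float)
    y = np.ones(m) / m if y0 is None else np.array(y0, dtype=float)
    gx_prev, gy_prev = A @ y, A.T @ x
    for _ in range(steps):
        gx, gy = A @ y, A.T @ x
        x = project_simplex(x + lr * (2 * gx - gx_prev))
        y = project_simplex(y - lr * (2 * gy - gy_prev))
        gx_prev, gy_prev = gx, gy
    return x, y
```

On matching pennies, for instance, plain gradient play orbits the mixed equilibrium while this optimistic variant spirals into it.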
In the stochastic contextual bandit setting, regret-minimizing algorithms have been extensively researched, but their instance-optimal best-arm identification counterparts remain seldom studied. In this work, we focus on the $(\epsilon,\delta)$-$\textit{PAC}$ setting: given a policy class $\Pi$, the goal of the learner is to return a policy $\pi \in \Pi$ whose expected reward is within $\epsilon$ of the optimal policy with probability greater than $1-\delta$. We characterize the first $\textit{instance-dependent}$ PAC sample complexity of contextual bandits through a quantity $\rho_{\Pi}$, and provide matching upper and lower bounds in terms of $\rho_{\Pi}$ for the agnostic and linear contextual best-arm identification settings. We show that no algorithm can be simultaneously minimax-optimal for regret minimization and instance-dependent PAC best-arm identification. Our main result is a new instance-optimal and computationally efficient algorithm that relies on a polynomial number of calls to an argmax oracle.
A seminal result in game theory is von Neumann's minmax theorem, which states that zero-sum games admit an essentially unique equilibrium solution. Classical learning results build on this theorem to show that online no-regret dynamics converge to equilibrium in a time-average sense in zero-sum games. In the past few years, a key research direction has focused on characterizing the day-to-day behavior of such dynamics. General results in this direction show that broad classes of online learning dynamics are cyclic, and formally Poincar\'{e} recurrent, in zero-sum games. We analyze the robustness of these online learning behaviors in the case of periodic zero-sum games with a time-invariant equilibrium. This model generalizes the usual repeated game formulation while also being a realistic and natural model of repeated competition between players that depends on exogenous environmental variations such as time-of-day effects, week-to-week trends, and seasonality. Interestingly, time-average convergence may fail even in the simplest such settings, despite the equilibrium being fixed. In contrast, using novel analysis techniques, we show that Poincar\'{e} recurrence provably generalizes despite the complex, non-autonomous nature of these dynamical systems.
We demonstrate a proof-of-concept of a large language model conducting corporate lobbying related activities. We use an autoregressive large language model (OpenAI's text-davinci-003) to determine if proposed U.S. Congressional bills are relevant to specific public companies and provide explanations and confidence levels. For the bills the model deems as relevant, the model drafts a letter to the sponsor of the bill in an attempt to persuade the congressperson to make changes to the proposed legislation. We use hundreds of ground-truth labels of the relevance of a bill to a company to benchmark the performance of the model, which outperforms the baseline of predicting the most common outcome of irrelevance. However, we test the ability to determine the relevance of a bill with the previous OpenAI GPT-3 model (text-davinci-002), which was state-of-the-art on many language tasks until text-davinci-003 was released on November 28, 2022. The performance of text-davinci-002 is worse than simply always predicting that a bill is irrelevant to a company. These results suggest that, as large language models continue to improve core natural language understanding capabilities, performance on corporate lobbying related tasks will continue to improve. We then discuss why this could be problematic for societal-AI alignment.
In the past years, deep learning has seen an increase of usage in the domain of histopathological applications. However, while these approaches have shown great potential, in high-risk environments deep learning models need to be able to judge their own uncertainty and be able to reject inputs when there is a significant chance of misclassification. In this work, we conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole-Slide-Images under domain shift using the H\&E stained Camelyon17 breast cancer dataset. Although it is known that histopathological data can be subject to strong domain shift and label noise, to our knowledge this is the first work that compares the most common methods for uncertainty estimation under these aspects. In our experiments, we compare Stochastic Variational Inference, Monte-Carlo Dropout, Deep Ensembles, Test-Time Data Augmentation as well as combinations thereof. We observe that ensembles of methods generally lead to higher accuracies and better calibration and that Test-Time Data Augmentation can be a promising alternative when choosing an appropriate set of augmentations. Across methods, a rejection of the most uncertain tiles leads to a significant increase in classification accuracy on both in-distribution as well as out-of-distribution data. Furthermore, we conduct experiments comparing these methods under varying conditions of label noise. We observe that the border regions of the Camelyon17 dataset are subject to label noise and evaluate the robustness of the included methods against different noise levels. Lastly, we publish our code framework to facilitate further research on uncertainty estimation on histopathological data.
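The rejection step this abstract evaluates, keeping only the most certain tiles, can be sketched independently of any particular model. The code below is an illustrative assumption on my part (predictive-entropy scoring over stacked stochastic forward passes, with made-up shapes), not the paper's exact pipeline:

```python
import numpy as np

def reject_most_uncertain(mc_probs, reject_frac=0.2):
    """Entropy-based rejection over Monte-Carlo predictions (sketch).

    mc_probs: array of shape (n_passes, n_tiles, n_classes) holding softmax
    outputs from stochastic forward passes (e.g. MC Dropout samples or
    test-time augmentations). Returns (predictions, keep_mask): the
    mean-probability argmax per tile, plus a mask that keeps the
    (1 - reject_frac) fraction of tiles with the lowest predictive entropy.
    """
    mean_p = mc_probs.mean(axis=0)                       # (n_tiles, n_classes)
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)
    preds = mean_p.argmax(axis=1)
    cutoff = np.quantile(entropy, 1.0 - reject_frac)
    keep = entropy <= cutoff
    return preds, keep
```

Accuracy is then reported only on the kept tiles, which is how rejecting the most uncertain inputs raises classification accuracy in- and out-of-distribution.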
In large-scale machine learning, recent works have studied the effects of compressing gradients in stochastic optimization in order to alleviate the communication bottleneck. These works have collectively revealed that stochastic gradient descent (SGD) is robust to structured perturbations such as quantization, sparsification, and delays. Perhaps surprisingly, despite the surge of interest in large-scale, multi-agent reinforcement learning, almost nothing is known about the analogous question: Are common reinforcement learning (RL) algorithms also robust to similar perturbations? In this paper, we investigate this question by studying a variant of the classical temporal difference (TD) learning algorithm with a perturbed update direction, where a general compression operator is used to model the perturbation. Our main technical contribution is to show that compressed TD algorithms, coupled with an error-feedback mechanism used widely in optimization, exhibit the same non-asymptotic theoretical guarantees as their SGD counterparts. We then extend our results significantly to nonlinear stochastic approximation algorithms and multi-agent settings. In particular, we prove that for multi-agent TD learning, one can achieve linear convergence speedups in the number of agents while communicating just $\tilde{O}(1)$ bits per agent at each time step. Our work is the first to provide finite-time results in RL that account for general compression operators and error-feedback in tandem with linear function approximation and Markovian sampling. Our analysis hinges on studying the drift of a novel Lyapunov function that captures the dynamics of a memory variable introduced by error feedback.
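The compression-plus-error-feedback mechanism described above can be sketched for linear TD(0). This is a minimal illustration under my own assumptions (top-k as the compression operator, a deterministic toy chain rather than Markovian sampling), not the paper's analyzed algorithm:

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries of v, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def compressed_td0(env_step, phi, d, alpha=0.05, gamma=0.9, k=2, steps=5000):
    """TD(0) with top-k compressed updates and error feedback (sketch).

    Instead of applying the raw semi-gradient update g, the agent applies
    h = top_k(g + e), where the memory e accumulates whatever the
    compressor dropped -- the same error-feedback trick used for
    compressed SGD, so no update mass is permanently lost.
    """
    theta = np.zeros(d)
    e = np.zeros(d)          # error-feedback memory
    s = 0
    for _ in range(steps):
        s2, r = env_step(s)
        # raw TD(0) semi-gradient direction with linear values phi(s) @ theta
        delta = r + gamma * phi(s2) @ theta - phi(s) @ theta
        g = alpha * delta * phi(s)
        h = top_k(g + e, k)  # compress the error-corrected update
        e = (g + e) - h      # remember what the compressor dropped
        theta += h
        s = s2
    return theta
```

On a 4-state cycle with constant reward 1 and one-hot features, every state's value is 1/(1-gamma) = 10, and the compressed iterates still settle near that fixed point despite transmitting only k coordinates per step.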
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely monitor the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Research on automated essay scoring has become increasingly important because it serves as a method for evaluating students' written responses at scale. Scalable methods for scoring written responses are needed as students migrate to online learning environments, resulting in the need to evaluate large numbers of written-response assessments. The purpose of this study is to describe and evaluate three active learning methods that can be used to minimize the number of essays that must be scored by human raters while still providing the data needed to train a modern automated essay scoring system. The three active learning methods are the uncertainty-based, the topological-based, and the hybrid method. These three methods were used to select essays included as part of the Automated Student Assessment Prize competition, which were then classified using a scoring model that was trained with the bidirectional encoder representations from transformers (BERT) language model. All three active learning methods produced strong results, with the topological-based method producing the most efficient classification. Growth rate accuracy was also evaluated. The active learning methods produced different levels of efficiency under different sample size allocations but, overall, all three methods were highly efficient and produced classifications that were similar to one another.
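Of the three selection strategies this abstract compares, the uncertainty-based one is the simplest to sketch. The least-confidence scoring below is a generic illustration (array shapes and the `budget` parameter are my assumptions), not the study's exact implementation:

```python
import numpy as np

def uncertainty_sample(probs, budget):
    """Least-confidence active learning selection (sketch).

    probs: array of shape (n_essays, n_scores) with the current model's
    predicted score probabilities. Returns the indices of the `budget`
    essays whose top predicted probability is lowest -- the essays the
    model is least sure about, and hence the ones routed to human raters.
    """
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:budget]
```

Each round, the human-scored selections are added to the training pool and the scoring model is retrained before the next selection.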